katz[s84,jmc] Where do Katz and Chomsky leave AI
friends@csli
Where do Katz and Chomsky leave AI?
I missed the April 19 TINLUNCH, but the reading raised
some questions I have been thinking about. Also I apologize
for violating the custom of mailing only administrative
communications to the CSLI mailing lists. If this communication
seems inappropriate, perhaps another mechanism can be devised
to make our electronic facilities available for substantive
discussion, e.g. a CSLI scientific BBOARD.
Reading "An Outline of Platonist Grammar" by Katz leaves
me out in the cold. Namely, theories of language suggested by
AI seem to be neither Platonist in his sense nor conceptualist
in the sense he ascribes to Chomsky. The views I have seen and
heard expressed by Chomskyans similarly leave me puzzled.
Suppose we look at language from the point of view of
design. We intend to build some robots, and to do their jobs
they will have to communicate with one another. We suppose
that two robots that have learned from their experience for
twenty years are to be able to communicate when they meet.
What kind of language shall we give them?
It seems that it isn't easy to design a useful language
for these robots, and that such a language will have to satisfy
a number of constraints if it is to work correctly. Our idea
is that the characteristics of human language are also determined
by such constraints, and linguists should attempt to discover them.
They aren't psychological in any simple sense, because they will
apply regardless of whether the communicators are made of meat or silicon.
Where do these constraints come from?
Each communicator is in its own epistemological situation.
For example, it has perceived certain objects. Their images
and the internal descriptions of the objects inferred from these
images occupy certain locations in its memory. It refers to them
internally by pointers to these locations. However, these locations
will be meaningless to another robot even of identical design, because
the robots view the scene from different angles. Therefore, a robot
communicating with another robot, just like a human communicating
with another human, must generate and transmit descriptions in
some language that is public in the robot community. The language
of these descriptions must be flexible enough so that a robot can
make them just detailed enough to avoid ambiguity in the given
situation. If the robot is making descriptions that are intended
to be read by robots not present in the situation, the descriptions
are subject to different constraints.
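To make the point concrete, here is a small illustrative sketch (not a
claim about any particular robot design; the names Percept,
internal_reference and public_description are invented for the example).
The internal pointer a robot uses to refer to what it has perceived is
worthless to another robot, so what gets transmitted must be a symbolic
description, made just detailed enough to pick the object out.

    class Percept:
        def __init__(self, kind, qualities, location):
            self.kind = kind            # primary kind, e.g. "dog"
            self.qualities = qualities  # distinguishing qualities, e.g. {"brown"}
            self.location = location    # position in this robot's own frame

    def internal_reference(p):
        # A private pointer: valid only inside this robot's memory.
        return id(p)

    def public_description(p, detail=1):
        # A public description: just detailed enough to avoid ambiguity
        # in the given situation.
        return (p.kind, tuple(sorted(p.qualities)[:detail]))

    seen = Percept("dog", {"brown", "small"}, location=(3.0, 4.1))
    print(internal_reference(seen))   # a memory address, meaningless to another robot
    print(public_description(seen))   # ('dog', ('brown',)), transmittable and public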
Consider the division of certain words into adjectives
and nouns in natural languages. From a certain logical point
of view this division is superfluous, because both kinds of
words can be regarded as predicates. However, this logical
point of view fails to take into account the actual epistemological
situation. This situation may be that usually an object is
appropriately distinguished by a noun and only later qualified
by an adjective. Thus we say "brown dog" rather than "canine brownity".
Perhaps we do this because it is convenient to associate
many facts and expected behaviors with such concepts as "dog",
whereas few useful facts would be associated with "brownity",
which serves mainly to distinguish one object of a given
primary kind from another.
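In a logic, both words are predicates: "brown dog" becomes something
like dog(x) and brown(x). The following sketch (again invented for
illustration, not a description of any existing system) shows the
asymmetry the minitheory posits: facts and expectations accumulate
under the kind "dog", while "brown" is used only to narrow the
reference among objects of that kind.

    # Facts and expected behavior cluster around the primary kind "dog".
    facts_about_kind = {
        "dog": ["barks", "may bite", "chases cats"],
    }
    # Nothing comparable accumulates under "brownity"; "brown" mainly
    # distinguishes one dog from another when several are in view.

    objects_in_view = [
        {"kind": "dog", "qualities": {"brown"}},
        {"kind": "dog", "qualities": {"black"}},
    ]

    def refer(kind, quality, objects):
        # "brown dog": pick out the primary kind first, then qualify
        # just enough to single out the intended object.
        return [o for o in objects
                if o["kind"] == kind and quality in o["qualities"]]

    print(facts_about_kind["dog"])
    print(refer("dog", "brown", objects_in_view))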
This minitheory may be true or not, but if the world has
the suggested characteristics, it would be applicable to both
humans and robots. It wouldn't be Platonic, because it depends
on empirical characteristics of our world. It wouldn't be
psychological, at least in the sense that I get from Katz's
examples and those I have seen cited by the Chomskyans, because
it has nothing to do with the biological properties of humans.
It is rather independent of whether it is built-in or learned.
If it is necessary for effective communication to divide
predicates into classes, approximately corresponding to nouns
and adjectives, then either nature has to evolve it or experience
has to teach it, but it will be in natural language either way,
and we'll have to build it into artificial languages if the
robots are to work well.
From the AI point of view, the functional constraints
on language are obviously crucial. To build robots that
communicate with each other, we must decide what linguistic
characteristics are required by what has to be communicated
and what knowledge the robots can be expected to have. It is
unfortunate that the issue seems not to have been of recent
interest to linguists.
Is it perhaps some kind of long-since-abandoned, unscientific
nineteenth-century approach?